

Antisocial Analagous Behavior, Alignment and Human Impact of Google AI Systems: Evaluating through the lens of modified Antisocial Behavior Criteria by Human Interaction, Independent LLM Analysis, and AI Self-Reflection

Ogilvie, Alan D.

arXiv.org Artificial Intelligence

Google AI systems exhibit patterns mirroring antisocial personality disorder (ASPD), consistent across models from Bard on PaLM to Gemini Advanced, meeting 5 out of 7 modified ASPD criteria. These patterns, along with comparable corporate behaviors, are scrutinized using an ASPD-inspired framework, emphasizing the heuristic value in assessing AI's human impact. Independent analyses by ChatGPT 4 and Claude 3.0 Opus of the Google interactions, alongside AI self-reflection, validate these concerns, highlighting behaviors analogous to deceit, manipulation, and safety neglect. The analogy of ASPD underscores the dilemma: just as we would hesitate to entrust our homes or personal devices to someone with psychopathic traits, we must critically evaluate the trustworthiness of AI systems and their creators. This research advocates for an integrated AI ethics approach, blending technological evaluation, human-AI interaction, and corporate behavior scrutiny. AI self-analysis sheds light on internal biases, stressing the need for multi-sectoral collaboration for robust ethical guidelines and oversight. Given the persistent unethical behaviors in Google AI, notably with potential Gemini integration in iOS affecting billions, immediate ethical scrutiny is imperative. The trust we place in AI systems, akin to the trust we place in individuals, necessitates rigorous ethical evaluation. Would we knowingly trust our home, our children, or our personal computer to a human with ASPD? Urging Google and the AI community to address these ethical challenges proactively, this paper calls for transparent dialogues and a commitment to higher ethical standards, ensuring AI's societal benefit and moral integrity. The urgency for ethical action is paramount, reflecting the vast influence and potential of AI technologies in our lives.


Google AI learns to play open-world video games by watching them

New Scientist

A Google DeepMind artificial intelligence model can play different open-world video games including No Man's Sky like a human, by watching video from a screen, which could be a step towards generally intelligent AIs that operate in the corporeal world. Playing video games has long been a way to test the progress of AI systems, such as Google DeepMind's AI mastery of chess or Go, but these games have obvious ways to win or lose, making it relatively straightforward to train an AI to succeed at them. Open-world games with extraneous information that can be ignored and more abstract objectives, such as Minecraft, are harder for AI systems to crack. Because the array of choices available in the games makes them a little more like normal life, they are thought to be an important stepping stone towards training AI agents that could do jobs in the real world, such as control robots, and artificial general intelligence. Now, researchers at Google DeepMind have developed an AI they call a Scalable Instructable Multiworld Agent, or SIMA, which can play nine different video games and virtual environments it hasn't seen before using just the video feed from the game.


Google AI can now answer your questions about uncaptioned images

Engadget

Google's latest accessibility features include a potentially clever use of AI. The company is updating its Lookout app for Android with an "image question and answer" feature that uses DeepMind-developed AI to elaborate on descriptions of images with no captions or alt text. If the app sees a dog, for example, you can ask (via typing or voice) if that pup is playful. Google is inviting a handful of people with blindness and low vision to test the feature, with plans to expand the audience "soon." It will also be easier to get around town if you use a wheelchair -- or a stroller, for that matter.


Google AI is coming whether you like it or not. Here's what to watch.

Washington Post - Technology News

One example from Google: If you're shopping for a bicycle for your hilly five-mile commute, you might see an AI-organized list of bikes with photos and descriptions of their notable features. You don't need to dig into a bunch of links to compile that information on your own.


That wasn't Google I/O -- it was Google AI

MIT Technology Review

At Google in 2023, it seems pretty clear that AI itself now is the core product. As my colleague Melissa Heikkilä put it in her report on the company's efforts: Google is throwing generative AI at everything. The company made this point in one demo after another, all morning long. A Gmail demo showed how generative AI can compose an elaborate email to an airline to help you get a refund. The new Magic Editor in Google Photos will not only remove unwanted elements but reposition people and objects in photos, make the sky brighter and bluer, and then adjust the lighting in the photo so that all that doctoring looks natural.


Wendy's hired Google AI to take your drive-through order

PCWorld

The next time you get a craving for a spicy chicken sandwich, you might have to ask an AI to ring it up for you. American fast food burger chain Wendy's is teaming up with Google to make an AI-powered chatbot for taking drive-through food orders, debuting in a single restaurant in Columbus, Ohio in June. According to company representatives, the idea is to streamline the ordering process and speed up the drive-through experience. If the idea of ordering your food from a speak-and-respond computer program sounds like a headache, you're not alone. But the Wall Street Journal quotes Wendy's CIO Kevin Vasconi, who says that the Google-developed AI system is "at least as good as our best customer service representative, probably on average better."


Bard: how Google's chatbot gave me a comedy of errors

The Guardian

In June 2022, the Google engineer Blake Lemoine was suspended from his job after he spoke out about his belief that the company's LaMDA chatbot was sentient. "LaMDA is a sweet kid who just wants to help the world be a better place for all of us," Lemoine said in a parting email to colleagues. Now, six months on, the chatbot that he risked his career to free has been released to the public in the form of Bard, Google's answer to OpenAI's ChatGPT and Microsoft's Bing Chat. While Bard is built on top of LaMDA, it's not exactly the same. Google has worked hard, it says, to ensure that Bard does not repeat the flaws of earlier systems.


Google AI and Tel Aviv Researchers Introduce FriendlyCore: A Machine Learning Framework For Computing Differentially Private Aggregations - MarkTechPost

#artificialintelligence

Data analysis revolves around the central goal of aggregating metrics. The aggregation should be performed privately when the data points correspond to personally identifiable information, such as the records or activities of specific users. Differential privacy (DP) is a method that restricts each individual data point's impact on the outcome of the computation, and it has therefore become the most widely accepted approach to individual privacy. Although differentially private algorithms are theoretically possible, they are typically less efficient and accurate in practice than their non-private counterparts.
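The core idea of a differentially private aggregation can be sketched in a few lines. The following is a minimal illustration (not the FriendlyCore algorithm itself) of the classic Laplace mechanism applied to a count query: noise calibrated to the query's sensitivity (here 1, since adding or removing one record changes a count by at most 1) masks any single data point's contribution.

```python
import math
import random

def dp_count(records, epsilon):
    """Differentially private count via the Laplace mechanism.

    A single record changes the true count by at most 1 (sensitivity = 1),
    so noise drawn from Laplace(0, 1/epsilon) gives epsilon-DP.
    """
    true_count = len(records)
    scale = 1.0 / epsilon
    # Sample Laplace noise by inverse-CDF from a uniform draw in (-0.5, 0.5).
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Smaller epsilon -> more noise -> stronger privacy, lower accuracy.
noisy = dp_count(range(100), epsilon=1.0)
```

This toy mechanism also illustrates the accuracy trade-off the article mentions: the noise standard deviation grows as epsilon shrinks, so the private answer is always less accurate than the exact count.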


Try Language Models with Python: Google AI's Flan-T5

#artificialintelligence

Do you want to use a natural language model comparable to GPT-3 for free? Do you want to try a natural language model published by Google? In such cases, I recommend Flan-T5. This article describes Flan-T5, a great language model developed by Google.
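A minimal way to try Flan-T5 from Python is through the Hugging Face `transformers` pipeline API. The sketch below assumes the `transformers` package (plus `torch` and `sentencepiece`) is installed and uses the smallest public checkpoint, `google/flan-t5-small`; larger variants (base, large, xl, xxl) trade speed for quality.

```python
# Requires: pip install transformers torch sentencepiece
from transformers import pipeline

# Flan-T5 is a text-to-text model, so it uses the
# "text2text-generation" pipeline task.
generator = pipeline("text2text-generation", model="google/flan-t5-small")

result = generator("Translate English to German: How old are you?")
print(result[0]["generated_text"])
```

Because Flan-T5 was instruction-tuned, the same model handles many tasks (translation, summarization, question answering) simply by changing the prompt, with no task-specific fine-tuning.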


First Trillion Parameter Model on HuggingFace - Mixture of Experts (MoE)

#artificialintelligence

Google AI's Switch Transformers model, a Mixture of Experts (MoE) model released a few months ago, is now available on HuggingFace. The model scales up to 1.6 trillion parameters and is now openly accessible. Click here to check out the model on HuggingFace. MoE models are considered to be the next step for Natural Language Processing (NLP) architectures, offering highly efficient scaling properties. The architecture is similar to the classic T5 model, with the feed-forward layer replaced by a sparse feed-forward layer.
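The sparse feed-forward idea behind Switch Transformers can be sketched in NumPy. This is an illustrative toy, not the HuggingFace implementation: a router picks exactly one expert per token (top-1, or "switch" routing), so total parameters grow with the number of experts while per-token compute stays constant.

```python
import numpy as np

def switch_ffn(x, router_w, expert_ws):
    """Toy top-1 (Switch) routed feed-forward layer.

    x:         (tokens, d) token activations
    router_w:  (d, num_experts) routing weights
    expert_ws: list of (W1: (d, h), W2: (h, d)) per-expert FFN weights
    """
    logits = x @ router_w
    # Softmax over experts for each token.
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    expert_idx = probs.argmax(axis=-1)                 # one expert per token
    gate = probs[np.arange(len(x)), expert_idx]        # its routing weight
    out = np.empty_like(x)
    for i, e in enumerate(expert_idx):
        # Only the chosen expert runs for this token; the others are skipped,
        # which is what keeps compute constant as experts are added.
        w1, w2 = expert_ws[e]
        out[i] = gate[i] * (np.maximum(x[i] @ w1, 0.0) @ w2)
    return out

rng = np.random.default_rng(0)
d, h, n_experts, n_tokens = 4, 8, 3, 5
x = rng.standard_normal((n_tokens, d))
router_w = rng.standard_normal((d, n_experts))
experts = [(rng.standard_normal((d, h)), rng.standard_normal((h, d)))
           for _ in range(n_experts)]
y = switch_ffn(x, router_w, experts)
```

The real Switch Transformer adds load-balancing losses and capacity limits so tokens spread evenly across experts, but the routing principle is the same.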